

Search for: All records

Creators/Authors contains: "Ng, Nyx_L"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Misinformation is widespread, but only some people accept the false information they encounter. This raises two questions: Who falls for misinformation, and why do they fall for misinformation? To address these questions, two studies investigated associations between 15 individual-difference dimensions and judgments of misinformation as true. Using Signal Detection Theory, the studies further investigated whether the obtained associations are driven by individual differences in truth sensitivity, acceptance threshold, or myside bias. For both political misinformation (Study 1) and misinformation about COVID-19 vaccines (Study 2), truth sensitivity was positively associated with cognitive reflection and actively open-minded thinking, and negatively associated with bullshit receptivity and conspiracy mentality. Although acceptance threshold and myside bias explained considerable variance in judgments of misinformation as true, neither showed robust associations with the measured individual-difference dimensions. The findings provide deeper insights into individual differences in misinformation susceptibility and uncover critical gaps in their scientific understanding. 
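To make the signal-detection decomposition described in this abstract concrete, here is a minimal Python sketch of the standard equal-variance indices: truth sensitivity as d', the acceptance threshold as the criterion c, and myside bias as a shift in criterion between belief-congruent and belief-incongruent items. This is an illustration of the general framework, not the authors' analysis code; all function names and numbers are hypothetical.

```python
# Illustrative equal-variance signal-detection decomposition of truth judgments.
# Not the authors' code; names and example rates are hypothetical.
from statistics import NormalDist

def sdt_indices(hit_rate: float, false_alarm_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion) for one participant.

    hit_rate:         proportion of true headlines judged "true"
    false_alarm_rate: proportion of false headlines judged "true"
    """
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)              # truth sensitivity
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))   # acceptance threshold
    return d_prime, criterion

# Example: a participant who accepts 80% of true and 30% of false headlines.
d, c = sdt_indices(0.80, 0.30)

# Myside bias indexed as the criterion shift between belief-congruent
# and belief-incongruent items (hypothetical rates for illustration).
_, c_congruent = sdt_indices(0.90, 0.45)
_, c_incongruent = sdt_indices(0.70, 0.15)
myside_bias = c_incongruent - c_congruent
print(f"d' = {d:.2f}, c = {c:.2f}, myside bias = {myside_bias:.2f}")
```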
  2. Recent years have seen a surge in research on why people fall for misinformation and what can be done about it. Drawing on a framework that conceptualizes truth judgments of true and false information as a signal-detection problem, the current article identifies three inaccurate assumptions in the public and scientific discourse about misinformation: (1) People are bad at discerning true from false information, (2) partisan bias is not a driving force in judgments of misinformation, and (3) gullibility to false information is the main factor underlying inaccurate beliefs. Counter to these assumptions, we argue that (1) people are quite good at discerning true from false information, (2) partisan bias in responses to true and false information is pervasive and strong, and (3) skepticism against belief-incongruent true information is much more pronounced than gullibility to belief-congruent false information. These conclusions have significant implications for person-centered misinformation interventions to tackle inaccurate beliefs. 
  3. A large body of research has investigated responses to artificial scenarios (e.g., trolley problem) where maximizing beneficial outcomes for the greater good (utilitarianism) conflicts with adherence to moral norms (deontology). The CNI model is a computational model that quantifies sensitivity to consequences for the greater good (C), sensitivity to moral norms (N), and general preference for inaction versus action (I) in responses to plausible moral dilemmas based on real-world events. Expanding on a description of the CNI model, the current article provides (a) a comprehensive review of empirical findings obtained with the CNI model, (b) an analysis of their theoretical implications, (c) a discussion of criticisms of the CNI model, and (d) an overview of alternative approaches to disentangle multiple factors underlying moral-dilemma responses and the relation of these approaches to the CNI model. The article concludes with a discussion of open questions and new directions for future research. Public Abstract: How do people make judgments about actions that violate moral norms yet maximize the greater good (e.g., sacrificing the well-being of a small number of people for the well-being of a larger number of people)? Research on this question has been criticized for relying on highly artificial scenarios and for conflating multiple distinct factors underlying responses in moral dilemmas. The current article reviews research that used a computational modeling approach to disentangle the roles of multiple distinct factors in responses to plausible moral dilemmas based on real-world events. By disentangling sensitivity to consequences, sensitivity to moral norms, and general preference for inaction versus action in responses to realistic dilemmas, the reviewed work provides a more nuanced understanding of how people make judgments about the right course of action in moral dilemmas.
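To illustrate the processing-tree logic of the CNI model described in this abstract, the short Python sketch below computes predicted action probabilities for the four standard dilemma types (proscriptive vs. prescriptive norm crossed with benefits greater vs. smaller than costs). The tree structure follows the commonly published description of the model rather than the authors' own estimation code, and the parameter values and function name are hypothetical.

```python
# Rough sketch of the CNI processing tree as commonly described: with
# probability C a response follows the consequences, with probability
# (1 - C) * N it follows the moral norm, and otherwise a general
# preference for inaction (I) applies. Illustrative only.
def p_action(C: float, N: float, I: float,
             norm_prescribes_action: bool,
             benefits_exceed_costs: bool) -> float:
    """Predicted probability of choosing action for one dilemma type."""
    p = 0.0
    if benefits_exceed_costs:           # consequence-driven responses favor action
        p += C
    if norm_prescribes_action:          # norm-driven responses favor action
        p += (1 - C) * N
    p += (1 - C) * (1 - N) * (1 - I)    # residual general action tendency
    return p

# Hypothetical parameter values for illustration only.
C, N, I = 0.3, 0.5, 0.6
for norm_prescribes in (False, True):   # False = proscriptive norm, True = prescriptive norm
    for benefits in (False, True):
        print(norm_prescribes, benefits,
              round(p_action(C, N, I, norm_prescribes, benefits), 3))
```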